
    Complexity Hierarchies and Higher-order Cons-free Term Rewriting

    Constructor rewriting systems are said to be cons-free if, roughly, constructor terms in the right-hand sides of rules are subterms of the left-hand sides; the computational intuition is that rules cannot build new data structures. In programming language research, cons-free languages have been used to characterize hierarchies of computational complexity classes; in term rewriting, cons-free first-order TRSs have been used to characterize the class PTIME. We investigate cons-free higher-order term rewriting systems, the complexity classes they characterize, and how these depend on the type order of the systems. We prove that, for every K ≥ 1, left-linear cons-free systems with type order K characterize E^KTIME if unrestricted evaluation is used (i.e., the system does not have a fixed reduction strategy). The main difference with prior work in implicit complexity is that (i) our results hold for non-orthogonal term rewriting systems with no assumptions on reduction strategy, (ii) we consequently obtain much larger classes for each type order (E^KTIME versus EXP^{K-1}TIME), and (iii) results for cons-free term rewriting systems have previously only been obtained for K = 1, and with additional syntactic restrictions besides cons-freeness and left-linearity. Our results are among the first implicit characterizations of the hierarchy E = E^1TIME ⊊ E^2TIME ⊊ ... Our work confirms prior results that having full non-determinism (via overlapping rules) does not directly allow for characterization of non-deterministic complexity classes like NE. We also show that non-determinism makes the classes characterized highly sensitive to minor syntactic changes like admitting product types or non-left-linear rules. Comment: extended version of a paper submitted to FSCD 2016. arXiv admin note: substantial text overlap with arXiv:1604.0893
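
    To make the cons-freeness restriction concrete, consider the following small first-order system (our illustration, not an example from the paper), with constructors 0 and s:
      \[
        \mathsf{minus}(x, \mathsf{0}) \to x \qquad
        \mathsf{minus}(\mathsf{s}(x), \mathsf{s}(y)) \to \mathsf{minus}(x, y) \qquad
        \mathsf{dbl}(\mathsf{s}(x)) \to \mathsf{s}(\mathsf{s}(\mathsf{dbl}(x)))
      \]
    The two minus rules are cons-free: their right-hand sides contain no constructor structure that does not already occur in the left-hand sides. The dbl rule is not cons-free, since the s(s(...)) on the right builds new data.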

    Complexity Hierarchies and Higher-Order Cons-Free Rewriting

    Constructor rewriting systems are said to be cons-free if, roughly, constructor terms in the right-hand sides of rules are subterms of constructor terms in the left-hand side; the computational intuition is that rules cannot build new data structures. It is well-known that cons-free programming languages can be used to characterize computational complexity classes, and that cons-free first-order term rewriting can be used to characterize the set of polynomial-time decidable sets. We investigate cons-free higher-order term rewriting systems, the complexity classes they characterize, and how these depend on the order of the types used in the systems. We prove that, for every k ≥ 1, left-linear cons-free systems with type order k characterize E^kTIME if arbitrary evaluation is used (i.e., the system does not have a fixed reduction strategy). The main difference with prior work in implicit complexity is that (i) our results hold for non-orthogonal term rewriting systems (with possible rule overlaps) and make no assumptions about reduction strategy, and (ii) results for such term rewriting systems have previously only been obtained for k = 1, and with additional syntactic restrictions on top of cons-freeness and left-linearity. Our results are apparently among the first implicit characterizations of the hierarchy E = E^1TIME ⊆ E^2TIME ⊆ ... Our work confirms prior results that having full non-determinism (via overlaps of rules) does not directly allow characterization of non-deterministic complexity classes like NE. We also show that non-determinism makes the classes characterized highly sensitive to minor syntactic changes such as admitting product types or non-left-linear rules. Comment: Extended version (with appendices) of a paper published in FSCD 2016
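
    For intuition about type orders (our sketch, under one common convention; the paper's formal definition may be phrased differently): base types have order 0, and the order of an arrow type is
      \[
        \mathit{ord}(\sigma \to \tau) = \max(\mathit{ord}(\sigma) + 1,\ \mathit{ord}(\tau)),
      \]
    so nat → nat → nat has order 1 (an ordinary first-order function on data), while (nat → nat) → nat, which takes a function as an argument, has order 2. By the results above, each additional level of this type hierarchy adds exactly one exponential to the characterized time bound.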

    Weak convergence and uniform normalization in infinitary rewriting

    We study infinitary term rewriting systems containing finitely many rules. For these, we show that if a weakly convergent reduction is not strongly convergent, it contains a term that reduces to itself in one step (but the step itself need not be part of the reduction). Using this result, we prove the perhaps surprising result that for any orthogonal system with finitely many rules, the system is weakly normalizing under weak convergence iff it is strongly normalizing under weak convergence iff it is weakly normalizing under strong convergence iff it is strongly normalizing under strong convergence. As further corollaries, we derive a number of new results for weakly convergent rewriting: Systems with finitely many rules enjoy unique normal forms, and acyclic orthogonal systems are confluent. Our results suggest that it may be possible to recover some of the positive results for strongly convergent rewriting in the setting of weak convergence, if systems with finitely many rules are considered. Finally, we give a number of counterexamples showing failure of most of the results when infinite sets of rules are allowed.
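
    A standard illustration of the gap between the two notions (our example, not taken from the paper): the single rule f(x) → f(s(x)) admits the reduction
      \[
        f(\mathsf{0}) \to f(\mathsf{s}(\mathsf{0})) \to f(\mathsf{s}(\mathsf{s}(\mathsf{0}))) \to \cdots
      \]
    which converges weakly to the infinite term f(s(s(s(...)))) but is not strongly convergent, since every step contracts a redex at the root and redex depths do not tend to infinity. Note that the limit term reduces to itself in one root step, exactly the kind of term the first result above guarantees must occur.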

    The Expressive Power of One Variable Used Once: The Chomsky Hierarchy and First-Order Monadic Constructor Rewriting

    We study the implicit computational complexity of constructor term rewriting systems where every function and constructor symbol is unary or nullary. Surprisingly, adding simple and natural constraints to rule formation yields classes of systems that accept exactly the four classes of languages in the Chomsky hierarchy.
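
    As a small illustration of the monadic format (our example; it does not reflect the specific rule constraints studied in the paper): a word over {a, b} is encoded as a unary constructor term, e.g. abab as a(b(a(b(ε)))), and the rules
      \[
        \mathsf{run}(\mathsf{a}(\mathsf{b}(x))) \to \mathsf{run}(x) \qquad
        \mathsf{run}(\epsilon) \to \mathsf{true}
      \]
    reduce run(w) to true exactly when w encodes a word in the regular language (ab)*. Every symbol here is unary or nullary, as the monadic restriction requires.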

    Evaluation Measures for Relevance and Credibility in Ranked Lists

    Recent discussions on alternative facts, fake news, and post-truth politics have motivated research on creating technologies that allow people not only to access information, but also to assess the credibility of the information presented to them by information retrieval systems. Whereas technology is in place for filtering information according to relevance and/or credibility, no single measure currently exists for evaluating the accuracy or precision (and more generally effectiveness) of both the relevance and the credibility of retrieved results. One obvious way of doing so is to measure relevance and credibility effectiveness separately, and then consolidate the two measures into one. There are at least two problems with such an approach: (I) it is not certain that the same criteria are applied to the evaluation of both relevance and credibility (and applying different criteria introduces bias to the evaluation); (II) many more and richer measures exist for assessing relevance effectiveness than for assessing credibility effectiveness (hence risking further bias). Motivated by the above, we present two novel types of evaluation measures that are designed to measure the effectiveness of both relevance and credibility in ranked lists of retrieval results. Experimental evaluation on a small human-annotated dataset (that we make freely available to the research community) shows that our measures are expressive and intuitive in their interpretation.
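
    A minimal sketch of the "obvious" consolidation the abstract describes (and argues is problematic), assuming graded relevance and credibility labels for each document in a ranked list; the function names and the convex combination below are our own illustration, not the measures proposed in the paper:

    import math

    def dcg(gains):
        """Discounted cumulative gain of graded labels given in rank order."""
        return sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))

    def ndcg(gains):
        """DCG normalized by the ideal (descending) ordering; 0.0 if all labels are 0."""
        ideal = dcg(sorted(gains, reverse=True))
        return dcg(gains) / ideal if ideal > 0 else 0.0

    def combined_effectiveness(relevance, credibility, alpha=0.5):
        """Naive consolidation: convex combination of separately computed nDCG scores.

        relevance and credibility are graded labels for the same ranked list, in
        rank order. This is the two-step approach the abstract argues against,
        because the two aspects end up being judged by different criteria.
        """
        return alpha * ndcg(relevance) + (1 - alpha) * ndcg(credibility)

    # Example: four retrieved documents with relevance labels 0-3 and credibility labels 0-2.
    print(combined_effectiveness([3, 2, 0, 1], [1, 0, 2, 2], alpha=0.6))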

    The exact hardness of deciding derivational and runtime complexity

    For any class C of computable total functions satisfying some mild conditions, we prove that the following decision problems are complete for the existential part of the second level of the arithmetical hierarchy: (A) Deciding whether a term rewriting system (TRS for short) has runtime complexity bounded by a function in C. (B) Deciding whether a TRS has derivational complexity bounded by a function in C. In particular, the problems of deciding whether a TRS has polynomially (exponentially) bounded runtime complexity (respectively derivational complexity) are complete for this level of the arithmetical hierarchy. This places deciding polynomial derivational or runtime complexity of TRSs at the same level as deciding nontermination or nonconfluence of TRSs. We proceed to show that the related problem of deciding for a single computable function f whether a TRS has runtime complexity bounded from above by f is complete for the universal part of the first level of the arithmetical hierarchy. We further prove that analysing the implicit complexity of TRSs is even more difficult: The problem of deciding whether a TRS accepts a language of terms accepted by some TRS with runtime complexity bounded by a function in C is complete for the existential part of the third level of the arithmetical hierarchy. All of our results are easily extended to the notion of minimal complexity (where the length of shortest reductions to normal form is considered) and remain valid under any computable reduction strategy. Finally, all results hold both for unrestricted TRSs and for the class of orthogonal TRSs.
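
    To unpack the classification (our informal paraphrase): the existential part of the second level of the arithmetical hierarchy is the class of sets definable by an ∃∀ quantifier prefix over a decidable matrix, and the runtime-complexity problem naturally has this shape:
      \[
        \Sigma^0_2:\qquad \{\, R \mid \exists f \in C\ \forall n \colon \mathrm{rc}_R(n) \le f(n) \,\},
      \]
    where rc_R(n) denotes the runtime complexity of the TRS R on inputs of size at most n; the paper shows this natural upper bound on the logical complexity is in fact tight. (We gloss over how the functions in C are indexed, which the paper's mild conditions take care of.)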

    Characterizing Languages by Normalization and Termination in String Rewriting


    A Complete Characterization of Infinitely Repeated Two-Player Games having Computable Strategies with no Computable Best Response under Limit-of-Means Payoff

    It is well-known that for infinitely repeated games, there are computable strategies that have best responses, but no computable best responses. These results were originally proved either for specific games (e.g., the Prisoner's Dilemma), or for classes of games satisfying certain conditions not known to be both necessary and sufficient. We derive a complete characterization in the form of simple necessary and sufficient conditions for the existence of a computable strategy without a computable best response under limit-of-means payoff. We further refine the characterization by requiring the strategy profiles to be Nash equilibria or subgame-perfect equilibria, and we show how the characterizations entail that it is efficiently decidable whether an infinitely repeated game has a computable strategy without a computable best response.
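
    For reference, the limit-of-means payoff of player i along an infinite play a_1, a_2, ... is the standard textbook quantity (not a detail specific to the paper)
      \[
        \liminf_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} u_i(a_t),
      \]
    where u_i(a_t) is the stage payoff at round t; some treatments use the limit superior, or require the limit to exist.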

    Entropy and Graph Based Modelling of Document Coherence using Discourse Entities: An Application

    We present two novel models of document coherence and their application to information retrieval (IR). Both models approximate document coherence using discourse entities, e.g. the subject or object of a sentence. Our first model views text as a Markov process generating sequences of discourse entities (entity n-grams); we use the entropy of these entity n-grams to approximate the rate at which new information appears in text, reasoning that as more new words appear, the topic increasingly drifts and text coherence decreases. Our second model extends the work of Guinaudeau & Strube [28] that represents text as a graph of discourse entities, linked by different relations, such as their distance or adjacency in text. We use several graph topology metrics to approximate different aspects of the discourse flow that can indicate coherence, such as the average clustering or betweenness of discourse entities in text. Experiments with several instantiations of these models show that: (i) our models perform on a par with two other well-known models of text coherence even without any parameter tuning, and (ii) reranking retrieval results according to their coherence scores gives notable performance gains, confirming a relation between document coherence and relevance. This work contributes two novel models of document coherence, the application of which to IR complements recent work on integrating document cohesiveness or comprehensibility into ranking [5, 56].
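
    A minimal sketch of the first model's core idea, assuming the document has already been reduced to a sequence of discourse entities; the function below is our own illustration, not the authors' implementation:

    import math
    from collections import Counter

    def entity_bigram_entropy(entities):
        """Shannon entropy (in bits) of the empirical distribution of entity bigrams.

        entities: the document as a sequence of discourse entities, e.g. the
        grammatical subject or object extracted from each sentence in order.
        Higher entropy means new entity transitions keep appearing, which the
        first model reads as topic drift and hence lower coherence.
        """
        bigrams = list(zip(entities, entities[1:]))
        if not bigrams:
            return 0.0
        counts = Counter(bigrams)
        total = len(bigrams)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    # A repetitive (more coherent) versus a scattered (less coherent) entity sequence.
    print(entity_bigram_entropy(["court", "ruling", "court", "ruling", "court"]))    # 1.0
    print(entity_bigram_entropy(["court", "weather", "recipe", "football", "mars"]))  # 2.0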